Online Classification: Perceptron and Winnow
Abstract
In this lecture we begin to study the online learning setting that was discussed briefly in the first lecture. In the batch setting studied so far, one is given a sample, or 'batch', of training data, and the goal is to learn from this data a model that can make accurate predictions in the future. In the online setting, by contrast, learning takes place over a sequence of trials: on each trial the learner must make a prediction or take some action, each of which can incur some loss, and the learner updates its prediction/decision model at the end of each trial so as to minimize the total loss incurred over the whole sequence of trials.
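The trial-by-trial protocol described above can be made concrete with the classic Perceptron algorithm, one of the two algorithms named in the title. Below is a minimal sketch (the function name and stream format are my own choices, not notation from the lecture): on each trial the learner predicts with the sign of a linear score, and on a mistake applies the additive update w ← w + y·x.

```python
import numpy as np

def perceptron_online(stream, d):
    """Run the Perceptron over a stream of (x, y) pairs with y in {-1, +1}.

    Returns the final weight vector and the total number of mistakes,
    which is the quantity online analyses typically bound.
    """
    w = np.zeros(d)
    mistakes = 0
    for x, y in stream:
        y_hat = 1 if np.dot(w, x) >= 0 else -1  # trial: predict a label
        if y_hat != y:                          # loss incurred: a mistake
            w += y * x                          # additive update
            mistakes += 1
    return w, mistakes
```

On linearly separable data the Perceptron convergence theorem guarantees the mistake count is finite, so cycling over a finite separable sample eventually yields a consistent classifier.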
Similar Resources
Adaptive Learning Rate for Online Linear Discriminant Classifiers
We propose a strategy for updating the learning rate parameter of online linear classifiers for streaming data with concept drift. The change in the learning rate is guided by the change in a running estimate of the classification error. In addition, we propose an online version of the standard linear discriminant classifier (O-LDC) in which the inverse of the common covariance matrix is update...
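The abstract above does not give the paper's actual update rule, but the general idea it describes can be illustrated generically: track a running (exponentially weighted) estimate of the error rate and scale the step size by it, so the learner moves faster when concept drift pushes the error up. Everything below (function name, the particular scaling `base_lr * (1 + err)`, the decay constant) is a hypothetical sketch, not the O-LDC method from the paper.

```python
import numpy as np

def adaptive_lr_perceptron(stream, d, decay=0.99, base_lr=0.1):
    """Illustrative only: an online linear classifier whose learning rate
    is guided by a running estimate of the classification error."""
    w = np.zeros(d)
    err = 0.0  # exponentially weighted running error estimate in [0, 1]
    for x, y in stream:
        y_hat = 1 if np.dot(w, x) >= 0 else -1
        mistake = float(y_hat != y)
        err = decay * err + (1 - decay) * mistake
        lr = base_lr * (1 + err)  # higher recent error -> larger steps
        if mistake:
            w += lr * y * x
    return w, err
```

Under stable conditions `err` decays toward zero and the step size settles near `base_lr`; after a drift, a burst of mistakes inflates `err` and hence the step size.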
Combining Online Classification Approaches for Changing Environments
Any change in the classification problem in the course of online classification is termed changing environments. Examples of changing environments include change in the underlying data distribution, change in the class definition, adding or removing a feature. The two general strategies for handling changing environments are (i) constant update of the classifier and (ii) re-training of the clas...
Regularized Winnow Methods
In theory, the Winnow multiplicative update has certain advantages over the Perceptron additive update when there are many irrelevant attributes. Recently, there has been much effort on enhancing the Perceptron algorithm by using regularization, leading to a class of linear classification methods called support vector machines. Similarly, it is also possible to apply the regularization idea to ...
An Online Ensemble of Classifiers
Along with the explosive growth of data and information, the ability to learn incrementally has become increasingly important for machine learning approaches. Online algorithms try to forget irrelevant information instead of synthesizing all available information (as opposed to classic batch learning algorithms). Nowadays, combining classifiers is proposed as a new direction for the improvemen...
Efficiency versus Convergence of Boolean Kernels for On-Line Learning Algorithms
We study online learning in Boolean domains using kernels which capture feature expansions equivalent to using conjunctions over basic features. We demonstrate a tradeoff between the computational efficiency with which these kernels can be computed and the generalization ability of the resulting classifier. We first describe several kernel functions which capture either limited forms of conjunc...
Publication date: 2012